Creating psychological safety in the AI era

MIT Technology Review

Trust in AI begins when leaders admit what they do not know, address fears, and help people adapt. Rolling out enterprise-grade AI means climbing two steep cliffs at once: first, implementing the technology itself; and second, creating the cultural conditions in which employees can maximize its value. While the technical hurdles are significant, the human element can be even more consequential; fear and ambiguity can stall the momentum of even the most promising initiatives. Psychological safety--feeling free to express opinions and take calculated risks without worrying about career repercussions--is essential for successful AI adoption. In psychologically safe workplaces, employees are empowered to challenge assumptions and raise concerns about new tools without fear of reprisal.


Falconry-like palm landing by a flapping-wing drone based on the human gesture interaction and distance-aware flight planning

Numazato, Kazuki, Kan, Keiichiro, Kitagawa, Masaki, Li, Yunong, Kubel, Johannes, Zhao, Moju

arXiv.org Artificial Intelligence

Flapping-wing drones have attracted significant attention due to their biomimetic flight. They are considered more human-friendly owing to characteristics such as low noise and flexible wings, making them suitable for human-drone interaction. However, few studies have explored practical interaction between humans and flapping-wing drones. In establishing a physical interaction system with flapping-wing drones, we can draw inspiration from falconers, who guide birds of prey to land on their arms. This interaction treats the human body as a dynamic landing platform, which can be useful in various scenarios such as crowded or spatially constrained environments. Thus, in this study, we propose a falconry-like interaction system in which a flapping-wing drone performs a palm-landing motion onto a human hand. To achieve a safe approach toward humans, we design a trajectory planning method that accounts for both physical and psychological factors of human safety, such as the drone's velocity and its distance from the user. We use a commercial flapping platform with our implemented motion planning and conduct experiments to evaluate palm-landing performance and safety. The results demonstrate that our approach enables safe and smooth hand-landing interactions. To the best of our knowledge, this is the first contact-based interaction achieved between flapping-wing drones and humans.


Sponsored post: Should robots have a voice in society?

#artificialintelligence

For the past few decades, robots have been confined to the factory floor. Robotic arms, concealed in big industrial buildings, weld cars, inspect items on conveyor belts, and build complicated things. This is all well hidden behind closed doors, and for good reason: industrial robots are bulky, limited, and sometimes dangerous. But as robot technology and AI have advanced, robots have burst into pedestrian spaces.


"They Weren't Even Treating Me Like a Person": A Black Tech Ethicist on Leaving Google

Slate

Earlier this fall, A.I. ethicist Timnit Gebru submitted a paper for consideration at an academic conference. The paper, about predictive language models, examined their environmental cost and how they could learn racist and sexist language and spread misinformation. Since she was working for Google, the company first wanted to review the paper--which Gebru wrote with several of her colleagues--and sign off on it. She was then told by senior managers that the paper didn't meet Google's publication bar, and that she should retract it or remove the names of Google employees. Gebru wanted more clarity on why they wanted it retracted and said that if Google couldn't provide that information, she would resign. This kicked off a few days of wrangling and several intense emails--until a manager emailed Gebru's boss, saying they had accepted her resignation.